Audio–visual perception‐based multimodal HCI
Authors
Abstract
Similar resources
BattleView: A Multimodal HCI Research Application
To demonstrate some of our research topics in Human Computer Interaction (HCI), we employ two modes of natural human-computer interaction to control a virtual environment. Using speech and gesture recognition, we outline the control of a virtual environment research testbed (BattleView) without the need for traditional virtual reality interfaces such as a wand, mouse, or keyboard. The use of fe...
Multimodal emotion recognition in audiovisual communication
This paper discusses innovative techniques to automatically estimate a user's emotional state by analyzing the speech signal and haptic interaction on a touch-screen or via mouse. Knowledge of a user's emotion permits adaptive strategies striving for a more natural and robust interaction. We classify seven emotional states: surprise, joy, anger, fear, disgust, sadness, and neutral user state...
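The seven-class recognition task described in this abstract can be sketched as an ordinary supervised-learning pipeline. The snippet below is a minimal illustration only: the three-dimensional synthetic features stand in for whatever acoustic and haptic features the paper actually extracts, and the SVM classifier is an assumption, not the authors' method.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# The seven user states listed in the abstract.
EMOTIONS = ["surprise", "joy", "anger", "fear", "disgust", "sadness", "neutral"]

rng = np.random.default_rng(0)
# Synthetic stand-in features: one well-separated cluster per emotion class.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(40, 3))
               for i in range(len(EMOTIONS))])
y = np.repeat(np.arange(len(EMOTIONS)), 40)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a multi-class classifier and label a held-out sample.
clf = SVC(kernel="rbf").fit(X_train, y_train)
pred = clf.predict(X_test[:1])[0]
print(EMOTIONS[int(pred)])
```

On real data the features would be frame-level prosodic and haptic measurements rather than labeled clusters, but the train/classify structure is the same.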
A Multimodal Approach to Audiovisual Text-to-Speech Synthesis
Oral speech has always been the most important means of communication between humans. When a message is conveyed using oral speech, it is encoded in two separate signals: an auditory speech signal and a visual speech signal. The auditory speech signal consists of a series of speech sounds that are produced by the human speech production system. In order to generate different sounds, the paramet...
Dimensional mapping of multimodal integration on audiovisual emotion perception
The aim of this research was to investigate which emotions are perceived from incongruent vocal and facial emotional expressions as an integrated emotional expression. Our approach maps unimodal and bimodal perceptual emotional information to dimensions of an emotional space created with principal component analysis (PCA). Unimodal perception tests and a bimodal congruent/incongruent percepti...
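The PCA-based mapping this abstract describes can be illustrated roughly as follows. The rating matrix here is fabricated for demonstration (rows are stimuli, columns are emotion rating scales); the study's actual scales, stimulus counts, and dimensionality are not stated in the excerpt.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Hypothetical perception-test data: each row is one stimulus (vocal-only,
# facial-only, or combined), each column the mean rating on one emotion scale.
ratings = rng.random((30, 6))

# Project the six rating scales onto a low-dimensional emotional space.
pca = PCA(n_components=2)
coords = pca.fit_transform(ratings)

# Every stimulus, congruent or incongruent, now has a position in the same
# 2-D space, so unimodal and bimodal perceptions can be compared directly.
print(coords.shape)  # (30, 2)
```

The point of the shared space is that an incongruent audiovisual stimulus can land between, or away from, the positions of its unimodal components, making the integration effect visible geometrically.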
Computational Pragmatics in HCI: Using Dialog Context in a MultiModal Interface*
This report describes a prototype built to explore the use of context representation in a multimodal interface. The report focuses on the definition of new graphical interaction techniques that make use of context information for context-based interaction. It describes the motivations for such techniques and presents some sample techniques and their natural-language equivalents. The r...
Journal
Journal title: The Journal of Engineering
Year: 2018
ISSN: 2051-3305
DOI: 10.1049/joe.2017.0333